Light field cameras capture the 3D information in a scene with a single exposure. This special feature makes light field cameras very appealing for a variety of applications: from post-capture refocus to depth estimation and image-based rendering. However, light field cameras suffer by design from strong limitations in their spatial resolution, which must therefore be augmented by computational methods. On the one hand, off-the-shelf single-frame and multi-frame super-resolution algorithms are not ideal for light field data, as they do not consider its particular structure. On the other hand, the few super-resolution algorithms explicitly tailored to light field data exhibit significant limitations, such as the need to estimate an explicit disparity map at each view. In this work we propose a new light field super-resolution algorithm designed to address these limitations. We adopt a multi-frame-like super-resolution approach, where the complementary information in the different light field views is used to augment the spatial resolution of the whole light field. We show that coupling the multi-frame approach with a graph regularizer that enforces the light field structure via nonlocal self-similarities makes it possible to avoid the costly and challenging disparity estimation step for all the views. Extensive experiments show that the new algorithm compares favorably to state-of-the-art methods for light field super-resolution, both in terms of PSNR and visual quality.
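The coupling of a multi-frame data term with a graph regularizer can be sketched as a generic objective of the following form (an illustrative formulation under assumed notation, not necessarily the paper's exact model): here $x$ is the high-resolution light field to recover, $y_i$ the observed low-resolution view $i$, $D$ and $B$ assumed downsampling and blur operators, and $L$ a graph Laplacian built from nonlocal self-similarities across views.

\[
\hat{x} \;=\; \arg\min_{x} \;\sum_{i} \left\| D B_i\, x - y_i \right\|_2^2 \;+\; \lambda\, x^{\top} L\, x
\]

The quadratic term $x^{\top} L x$ penalizes differences between pixels connected in the similarity graph, which is how the light field structure can be enforced without estimating a per-view disparity map.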